Topic: Machine Learning in Breast Cancer Diagnostic Dataset¶

The primary objective of this project is to leverage the WDBC dataset to determine whether certain features hold greater significance than others in the diagnosis of tumor malignancy.¶

Wisconsin Diagnostic Breast Cancer (WDBC) Data: https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic

In [1]:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display
import seaborn as sns

import matplotlib.pyplot as plt

import warnings
warnings.filterwarnings('ignore')

from sklearn.preprocessing import StandardScaler

from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC 


from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import make_scorer, fbeta_score, accuracy_score, precision_score

%matplotlib inline
In [2]:
ls
Breast Cancer Wisconsin (Diagnostic) - UCI Machine Learning Repository.pdf
Task3.pdf
WDBC_ML_Project.ipynb
wdbc_data/
In [3]:
ls wdbc_data/
wdbc.data*  wdbc.names*
In [4]:
cat wdbc_data/wdbc.names
1. Title: Wisconsin Diagnostic Breast Cancer (WDBC)

2. Source Information

a) Creators: 

	Dr. William H. Wolberg, General Surgery Dept., University of
	Wisconsin,  Clinical Sciences Center, Madison, WI 53792
	wolberg@eagle.surgery.wisc.edu

	W. Nick Street, Computer Sciences Dept., University of
	Wisconsin, 1210 West Dayton St., Madison, WI 53706
	street@cs.wisc.edu  608-262-6619

	Olvi L. Mangasarian, Computer Sciences Dept., University of
	Wisconsin, 1210 West Dayton St., Madison, WI 53706
	olvi@cs.wisc.edu 

b) Donor: Nick Street

c) Date: November 1995

3. Past Usage:

first usage:

	W.N. Street, W.H. Wolberg and O.L. Mangasarian 
	Nuclear feature extraction for breast tumor diagnosis.
	IS&T/SPIE 1993 International Symposium on Electronic Imaging: Science
	and Technology, volume 1905, pages 861-870, San Jose, CA, 1993.

OR literature:

	O.L. Mangasarian, W.N. Street and W.H. Wolberg. 
	Breast cancer diagnosis and prognosis via linear programming. 
	Operations Research, 43(4), pages 570-577, July-August 1995.

Medical literature:

	W.H. Wolberg, W.N. Street, and O.L. Mangasarian. 
	Machine learning techniques to diagnose breast cancer from
	fine-needle aspirates.  
	Cancer Letters 77 (1994) 163-171.

	W.H. Wolberg, W.N. Street, and O.L. Mangasarian. 
	Image analysis and machine learning applied to breast cancer
	diagnosis and prognosis.  
	Analytical and Quantitative Cytology and Histology, Vol. 17
	No. 2, pages 77-87, April 1995. 

	W.H. Wolberg, W.N. Street, D.M. Heisey, and O.L. Mangasarian. 
	Computerized breast cancer diagnosis and prognosis from fine
	needle aspirates.  
	Archives of Surgery 1995;130:511-516.

	W.H. Wolberg, W.N. Street, D.M. Heisey, and O.L. Mangasarian. 
	Computer-derived nuclear features distinguish malignant from
	benign breast cytology.  
	Human Pathology, 26:792--796, 1995.

See also:
	http://www.cs.wisc.edu/~olvi/uwmp/mpml.html
	http://www.cs.wisc.edu/~olvi/uwmp/cancer.html

Results:

	- predicting field 2, diagnosis: B = benign, M = malignant
	- sets are linearly separable using all 30 input features
	- best predictive accuracy obtained using one separating plane
		in the 3-D space of Worst Area, Worst Smoothness and
		Mean Texture.  Estimated accuracy 97.5% using repeated
		10-fold crossvalidations.  Classifier has correctly
		diagnosed 176 consecutive new patients as of November
		1995. 

4. Relevant information

	Features are computed from a digitized image of a fine needle
	aspirate (FNA) of a breast mass.  They describe
	characteristics of the cell nuclei present in the image.
	A few of the images can be found at
	http://www.cs.wisc.edu/~street/images/

	Separating plane described above was obtained using
	Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree
	Construction Via Linear Programming." Proceedings of the 4th
	Midwest Artificial Intelligence and Cognitive Science Society,
	pp. 97-101, 1992], a classification method which uses linear
	programming to construct a decision tree.  Relevant features
	were selected using an exhaustive search in the space of 1-4
	features and 1-3 separating planes.

	The actual linear program used to obtain the separating plane
	in the 3-dimensional space is that described in:
	[K. P. Bennett and O. L. Mangasarian: "Robust Linear
	Programming Discrimination of Two Linearly Inseparable Sets",
	Optimization Methods and Software 1, 1992, 23-34].


	This database is also available through the UW CS ftp server:

	ftp ftp.cs.wisc.edu
	cd math-prog/cpo-dataset/machine-learn/WDBC/

5. Number of instances: 569 

6. Number of attributes: 32 (ID, diagnosis, 30 real-valued input features)

7. Attribute information

1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32)

Ten real-valued features are computed for each cell nucleus:

	a) radius (mean of distances from center to points on the perimeter)
	b) texture (standard deviation of gray-scale values)
	c) perimeter
	d) area
	e) smoothness (local variation in radius lengths)
	f) compactness (perimeter^2 / area - 1.0)
	g) concavity (severity of concave portions of the contour)
	h) concave points (number of concave portions of the contour)
	i) symmetry 
	j) fractal dimension ("coastline approximation" - 1)

Several of the papers listed above contain detailed descriptions of
how these features are computed. 

The mean, standard error, and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features.  For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.

All feature values are recoded with four significant digits.

8. Missing attribute values: none

9. Class distribution: 357 benign, 212 malignant

In [5]:
columns_name = ["ID","Diagnosis","radius-M","texture-M","perimeter-M","area-M",
                "smoothness-M","compactness-M","concavity-M","concave_points-M","symmetry-M","fractal_dimension-M",
                "radius-SE","texture-SE","perimeter-SE","area-SE",
                "smoothness-SE","compactness-SE","concavity-SE","concave_points-SE","symmetry-SE","fractal_dimension-SE",
               "radius-W","texture-W","perimeter-W","area-W",
                "smoothness-W","compactness-W","concavity-W","concave_points-W","symmetry-W","fractal_dimension-W",]
In [6]:
wdbc_df = pd.read_csv('wdbc_data/wdbc.data', names=columns_name)
In [7]:
wdbc_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 569 entries, 0 to 568
Data columns (total 32 columns):
 #   Column                Non-Null Count  Dtype  
---  ------                --------------  -----  
 0   ID                    569 non-null    int64  
 1   Diagnosis             569 non-null    object 
 2   radius-M              569 non-null    float64
 3   texture-M             569 non-null    float64
 4   perimeter-M           569 non-null    float64
 5   area-M                569 non-null    float64
 6   smoothness-M          569 non-null    float64
 7   compactness-M         569 non-null    float64
 8   concavity-M           569 non-null    float64
 9   concave_points-M      569 non-null    float64
 10  symmetry-M            569 non-null    float64
 11  fractal_dimension-M   569 non-null    float64
 12  radius-SE             569 non-null    float64
 13  texture-SE            569 non-null    float64
 14  perimeter-SE          569 non-null    float64
 15  area-SE               569 non-null    float64
 16  smoothness-SE         569 non-null    float64
 17  compactness-SE        569 non-null    float64
 18  concavity-SE          569 non-null    float64
 19  concave_points-SE     569 non-null    float64
 20  symmetry-SE           569 non-null    float64
 21  fractal_dimension-SE  569 non-null    float64
 22  radius-W              569 non-null    float64
 23  texture-W             569 non-null    float64
 24  perimeter-W           569 non-null    float64
 25  area-W                569 non-null    float64
 26  smoothness-W          569 non-null    float64
 27  compactness-W         569 non-null    float64
 28  concavity-W           569 non-null    float64
 29  concave_points-W      569 non-null    float64
 30  symmetry-W            569 non-null    float64
 31  fractal_dimension-W   569 non-null    float64
dtypes: float64(30), int64(1), object(1)
memory usage: 142.4+ KB
In [8]:
wdbc_df.head()
Out[8]:
ID Diagnosis radius-M texture-M perimeter-M area-M smoothness-M compactness-M concavity-M concave_points-M ... radius-W texture-W perimeter-W area-W smoothness-W compactness-W concavity-W concave_points-W symmetry-W fractal_dimension-W
0 842302 M 17.99 10.38 122.80 1001.0 0.11840 0.27760 0.3001 0.14710 ... 25.38 17.33 184.60 2019.0 0.1622 0.6656 0.7119 0.2654 0.4601 0.11890
1 842517 M 20.57 17.77 132.90 1326.0 0.08474 0.07864 0.0869 0.07017 ... 24.99 23.41 158.80 1956.0 0.1238 0.1866 0.2416 0.1860 0.2750 0.08902
2 84300903 M 19.69 21.25 130.00 1203.0 0.10960 0.15990 0.1974 0.12790 ... 23.57 25.53 152.50 1709.0 0.1444 0.4245 0.4504 0.2430 0.3613 0.08758
3 84348301 M 11.42 20.38 77.58 386.1 0.14250 0.28390 0.2414 0.10520 ... 14.91 26.50 98.87 567.7 0.2098 0.8663 0.6869 0.2575 0.6638 0.17300
4 84358402 M 20.29 14.34 135.10 1297.0 0.10030 0.13280 0.1980 0.10430 ... 22.54 16.67 152.20 1575.0 0.1374 0.2050 0.4000 0.1625 0.2364 0.07678

5 rows × 32 columns

In [9]:
wdbc_df.describe().T
Out[9]:
count mean std min 25% 50% 75% max
ID 569.0 3.037183e+07 1.250206e+08 8670.000000 869218.000000 906024.000000 8.813129e+06 9.113205e+08
radius-M 569.0 1.412729e+01 3.524049e+00 6.981000 11.700000 13.370000 1.578000e+01 2.811000e+01
texture-M 569.0 1.928965e+01 4.301036e+00 9.710000 16.170000 18.840000 2.180000e+01 3.928000e+01
perimeter-M 569.0 9.196903e+01 2.429898e+01 43.790000 75.170000 86.240000 1.041000e+02 1.885000e+02
area-M 569.0 6.548891e+02 3.519141e+02 143.500000 420.300000 551.100000 7.827000e+02 2.501000e+03
smoothness-M 569.0 9.636028e-02 1.406413e-02 0.052630 0.086370 0.095870 1.053000e-01 1.634000e-01
compactness-M 569.0 1.043410e-01 5.281276e-02 0.019380 0.064920 0.092630 1.304000e-01 3.454000e-01
concavity-M 569.0 8.879932e-02 7.971981e-02 0.000000 0.029560 0.061540 1.307000e-01 4.268000e-01
concave_points-M 569.0 4.891915e-02 3.880284e-02 0.000000 0.020310 0.033500 7.400000e-02 2.012000e-01
symmetry-M 569.0 1.811619e-01 2.741428e-02 0.106000 0.161900 0.179200 1.957000e-01 3.040000e-01
fractal_dimension-M 569.0 6.279761e-02 7.060363e-03 0.049960 0.057700 0.061540 6.612000e-02 9.744000e-02
radius-SE 569.0 4.051721e-01 2.773127e-01 0.111500 0.232400 0.324200 4.789000e-01 2.873000e+00
texture-SE 569.0 1.216853e+00 5.516484e-01 0.360200 0.833900 1.108000 1.474000e+00 4.885000e+00
perimeter-SE 569.0 2.866059e+00 2.021855e+00 0.757000 1.606000 2.287000 3.357000e+00 2.198000e+01
area-SE 569.0 4.033708e+01 4.549101e+01 6.802000 17.850000 24.530000 4.519000e+01 5.422000e+02
smoothness-SE 569.0 7.040979e-03 3.002518e-03 0.001713 0.005169 0.006380 8.146000e-03 3.113000e-02
compactness-SE 569.0 2.547814e-02 1.790818e-02 0.002252 0.013080 0.020450 3.245000e-02 1.354000e-01
concavity-SE 569.0 3.189372e-02 3.018606e-02 0.000000 0.015090 0.025890 4.205000e-02 3.960000e-01
concave_points-SE 569.0 1.179614e-02 6.170285e-03 0.000000 0.007638 0.010930 1.471000e-02 5.279000e-02
symmetry-SE 569.0 2.054230e-02 8.266372e-03 0.007882 0.015160 0.018730 2.348000e-02 7.895000e-02
fractal_dimension-SE 569.0 3.794904e-03 2.646071e-03 0.000895 0.002248 0.003187 4.558000e-03 2.984000e-02
radius-W 569.0 1.626919e+01 4.833242e+00 7.930000 13.010000 14.970000 1.879000e+01 3.604000e+01
texture-W 569.0 2.567722e+01 6.146258e+00 12.020000 21.080000 25.410000 2.972000e+01 4.954000e+01
perimeter-W 569.0 1.072612e+02 3.360254e+01 50.410000 84.110000 97.660000 1.254000e+02 2.512000e+02
area-W 569.0 8.805831e+02 5.693570e+02 185.200000 515.300000 686.500000 1.084000e+03 4.254000e+03
smoothness-W 569.0 1.323686e-01 2.283243e-02 0.071170 0.116600 0.131300 1.460000e-01 2.226000e-01
compactness-W 569.0 2.542650e-01 1.573365e-01 0.027290 0.147200 0.211900 3.391000e-01 1.058000e+00
concavity-W 569.0 2.721885e-01 2.086243e-01 0.000000 0.114500 0.226700 3.829000e-01 1.252000e+00
concave_points-W 569.0 1.146062e-01 6.573234e-02 0.000000 0.064930 0.099930 1.614000e-01 2.910000e-01
symmetry-W 569.0 2.900756e-01 6.186747e-02 0.156500 0.250400 0.282200 3.179000e-01 6.638000e-01
fractal_dimension-W 569.0 8.394582e-02 1.806127e-02 0.055040 0.071460 0.080040 9.208000e-02 2.075000e-01
In [10]:
wdbc_df.isnull().values.any() 
Out[10]:
False
In [11]:
wdbc_df.shape
Out[11]:
(569, 32)
In [12]:
malignant = (wdbc_df["Diagnosis"] == "M").sum()
malignant
Out[12]:
212
In [13]:
benign = (wdbc_df["Diagnosis"] == "B").sum()
benign
Out[13]:
357
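The two counts above can also be read off in a single call with `value_counts`. A minimal sketch on a toy Series (in the notebook it would be `wdbc_df['Diagnosis'].value_counts()`):

```python
import pandas as pd

# Sketch: count each class label in one call instead of two boolean sums.
s = pd.Series(['M', 'B', 'B', 'M', 'B'])
print(s.value_counts().to_dict())  # {'B': 3, 'M': 2}
```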

Look at the proportion of benign and malignant tumors with a pie chart¶

In [14]:
labels = ['malignant','benign']
sizes = [malignant,benign]
#fig,ax = plt.subplots(figsize=(6,3))
plt.pie(sizes, labels=labels, autopct='%1.1f%%')
plt.title('Percentage of malignant and benign tumors.')
plt.show()

Correlation matrix¶

In [15]:
wdbc_df = wdbc_df.drop('ID', axis = 1)
In [16]:
wdbc_df['Diagnosis'] = wdbc_df['Diagnosis'].map({'B':0, 'M':1})
# Now, the 'diagnosis' column contains 0 for benign and 1 for malignant
In [17]:
# Calculate the correlation matrix
correlation_matrix = wdbc_df.corr()
In [18]:
correlation_matrix
Out[18]:
Diagnosis radius-M texture-M perimeter-M area-M smoothness-M compactness-M concavity-M concave_points-M symmetry-M ... radius-W texture-W perimeter-W area-W smoothness-W compactness-W concavity-W concave_points-W symmetry-W fractal_dimension-W
Diagnosis 1.000000 0.730029 0.415185 0.742636 0.708984 0.358560 0.596534 0.696360 0.776614 0.330499 ... 0.776454 0.456903 0.782914 0.733825 0.421465 0.590998 0.659610 0.793566 0.416294 0.323872
radius-M 0.730029 1.000000 0.323782 0.997855 0.987357 0.170581 0.506124 0.676764 0.822529 0.147741 ... 0.969539 0.297008 0.965137 0.941082 0.119616 0.413463 0.526911 0.744214 0.163953 0.007066
texture-M 0.415185 0.323782 1.000000 0.329533 0.321086 -0.023389 0.236702 0.302418 0.293464 0.071401 ... 0.352573 0.912045 0.358040 0.343546 0.077503 0.277830 0.301025 0.295316 0.105008 0.119205
perimeter-M 0.742636 0.997855 0.329533 1.000000 0.986507 0.207278 0.556936 0.716136 0.850977 0.183027 ... 0.969476 0.303038 0.970387 0.941550 0.150549 0.455774 0.563879 0.771241 0.189115 0.051019
area-M 0.708984 0.987357 0.321086 0.986507 1.000000 0.177028 0.498502 0.685983 0.823269 0.151293 ... 0.962746 0.287489 0.959120 0.959213 0.123523 0.390410 0.512606 0.722017 0.143570 0.003738
smoothness-M 0.358560 0.170581 -0.023389 0.207278 0.177028 1.000000 0.659123 0.521984 0.553695 0.557775 ... 0.213120 0.036072 0.238853 0.206718 0.805324 0.472468 0.434926 0.503053 0.394309 0.499316
compactness-M 0.596534 0.506124 0.236702 0.556936 0.498502 0.659123 1.000000 0.883121 0.831135 0.602641 ... 0.535315 0.248133 0.590210 0.509604 0.565541 0.865809 0.816275 0.815573 0.510223 0.687382
concavity-M 0.696360 0.676764 0.302418 0.716136 0.685983 0.521984 0.883121 1.000000 0.921391 0.500667 ... 0.688236 0.299879 0.729565 0.675987 0.448822 0.754968 0.884103 0.861323 0.409464 0.514930
concave_points-M 0.776614 0.822529 0.293464 0.850977 0.823269 0.553695 0.831135 0.921391 1.000000 0.462497 ... 0.830318 0.292752 0.855923 0.809630 0.452753 0.667454 0.752399 0.910155 0.375744 0.368661
symmetry-M 0.330499 0.147741 0.071401 0.183027 0.151293 0.557775 0.602641 0.500667 0.462497 1.000000 ... 0.185728 0.090651 0.219169 0.177193 0.426675 0.473200 0.433721 0.430297 0.699826 0.438413
fractal_dimension-M -0.012838 -0.311631 -0.076437 -0.261477 -0.283110 0.584792 0.565369 0.336783 0.166917 0.479921 ... -0.253691 -0.051269 -0.205151 -0.231854 0.504942 0.458798 0.346234 0.175325 0.334019 0.767297
radius-SE 0.567134 0.679090 0.275869 0.691765 0.732562 0.301467 0.497473 0.631925 0.698050 0.303379 ... 0.715065 0.194799 0.719684 0.751548 0.141919 0.287103 0.380585 0.531062 0.094543 0.049559
texture-SE -0.008303 -0.097317 0.386358 -0.086761 -0.066280 0.068406 0.046205 0.076218 0.021480 0.128053 ... -0.111690 0.409003 -0.102242 -0.083195 -0.073658 -0.092439 -0.068956 -0.119638 -0.128215 -0.045655
perimeter-SE 0.556141 0.674172 0.281673 0.693135 0.726628 0.296092 0.548905 0.660391 0.710650 0.313893 ... 0.697201 0.200371 0.721031 0.730713 0.130054 0.341919 0.418899 0.554897 0.109930 0.085433
area-SE 0.548236 0.735864 0.259845 0.744983 0.800086 0.246552 0.455653 0.617427 0.690299 0.223970 ... 0.757373 0.196497 0.761213 0.811408 0.125389 0.283257 0.385100 0.538166 0.074126 0.017539
smoothness-SE -0.067016 -0.222600 0.006614 -0.202694 -0.166777 0.332375 0.135299 0.098564 0.027653 0.187321 ... -0.230691 -0.074743 -0.217304 -0.182195 0.314457 -0.055558 -0.058298 -0.102007 -0.107342 0.101480
compactness-SE 0.292999 0.206000 0.191975 0.250744 0.212583 0.318943 0.738722 0.670279 0.490424 0.421659 ... 0.204607 0.143003 0.260516 0.199371 0.227394 0.678780 0.639147 0.483208 0.277878 0.590973
concavity-SE 0.253730 0.194204 0.143293 0.228082 0.207660 0.248396 0.570517 0.691270 0.439167 0.342627 ... 0.186904 0.100241 0.226680 0.188353 0.168481 0.484858 0.662564 0.440472 0.197788 0.439329
concave_points-SE 0.408042 0.376169 0.163851 0.407217 0.372320 0.380676 0.642262 0.683260 0.615634 0.393298 ... 0.358127 0.086741 0.394999 0.342271 0.215351 0.452888 0.549592 0.602450 0.143116 0.310655
symmetry-SE -0.006522 -0.104321 0.009127 -0.081629 -0.072497 0.200774 0.229977 0.178009 0.095351 0.449137 ... -0.128121 -0.077473 -0.103753 -0.110343 -0.012662 0.060255 0.037119 -0.030413 0.389402 0.078079
fractal_dimension-SE 0.077972 -0.042641 0.054458 -0.005523 -0.019887 0.283607 0.507318 0.449301 0.257584 0.331786 ... -0.037488 -0.003195 -0.001000 -0.022736 0.170568 0.390159 0.379975 0.215204 0.111094 0.591328
radius-W 0.776454 0.969539 0.352573 0.969476 0.962746 0.213120 0.535315 0.688236 0.830318 0.185728 ... 1.000000 0.359921 0.993708 0.984015 0.216574 0.475820 0.573975 0.787424 0.243529 0.093492
texture-W 0.456903 0.297008 0.912045 0.303038 0.287489 0.036072 0.248133 0.299879 0.292752 0.090651 ... 0.359921 1.000000 0.365098 0.345842 0.225429 0.360832 0.368366 0.359755 0.233027 0.219122
perimeter-W 0.782914 0.965137 0.358040 0.970387 0.959120 0.238853 0.590210 0.729565 0.855923 0.219169 ... 0.993708 0.365098 1.000000 0.977578 0.236775 0.529408 0.618344 0.816322 0.269493 0.138957
area-W 0.733825 0.941082 0.343546 0.941550 0.959213 0.206718 0.509604 0.675987 0.809630 0.177193 ... 0.984015 0.345842 0.977578 1.000000 0.209145 0.438296 0.543331 0.747419 0.209146 0.079647
smoothness-W 0.421465 0.119616 0.077503 0.150549 0.123523 0.805324 0.565541 0.448822 0.452753 0.426675 ... 0.216574 0.225429 0.236775 0.209145 1.000000 0.568187 0.518523 0.547691 0.493838 0.617624
compactness-W 0.590998 0.413463 0.277830 0.455774 0.390410 0.472468 0.865809 0.754968 0.667454 0.473200 ... 0.475820 0.360832 0.529408 0.438296 0.568187 1.000000 0.892261 0.801080 0.614441 0.810455
concavity-W 0.659610 0.526911 0.301025 0.563879 0.512606 0.434926 0.816275 0.884103 0.752399 0.433721 ... 0.573975 0.368366 0.618344 0.543331 0.518523 0.892261 1.000000 0.855434 0.532520 0.686511
concave_points-W 0.793566 0.744214 0.295316 0.771241 0.722017 0.503053 0.815573 0.861323 0.910155 0.430297 ... 0.787424 0.359755 0.816322 0.747419 0.547691 0.801080 0.855434 1.000000 0.502528 0.511114
symmetry-W 0.416294 0.163953 0.105008 0.189115 0.143570 0.394309 0.510223 0.409464 0.375744 0.699826 ... 0.243529 0.233027 0.269493 0.209146 0.493838 0.614441 0.532520 0.502528 1.000000 0.537848
fractal_dimension-W 0.323872 0.007066 0.119205 0.051019 0.003738 0.499316 0.687382 0.514930 0.368661 0.438413 ... 0.093492 0.219122 0.138957 0.079647 0.617624 0.810455 0.686511 0.511114 0.537848 1.000000

31 rows × 31 columns

In [19]:
# Create the heatmap

plt.figure(figsize=(32,24))

sns.set(font_scale=1.5)  # Adjust the font scale if needed
sns.heatmap(correlation_matrix, annot=True, cmap="coolwarm", linewidths=.5)

plt.title('Correlation Heatmap')
  # Adjust the figure size if needed
plt.show()
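With 31 columns the heatmap is dense, so it helps to rank features by absolute correlation with the 0/1 Diagnosis label directly. A sketch on a tiny synthetic frame; in the notebook, `df` would be `wdbc_df` (after the mapping above) and the same chain applies to `correlation_matrix['Diagnosis']`.

```python
import pandas as pd

# Sketch: rank feature columns by |correlation| with a 0/1 label column.
df = pd.DataFrame({
    'Diagnosis': [0, 0, 1, 1],
    'feat_a':    [1.0, 2.0, 9.0, 10.0],  # strongly label-aligned
    'feat_b':    [5.0, 1.0, 4.0, 2.0],   # uncorrelated with the label
})

top_corr = (df.corr()['Diagnosis']
              .drop('Diagnosis')        # exclude self-correlation (always 1.0)
              .abs()
              .sort_values(ascending=False))
print(top_corr.index[0])  # feat_a
```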
In [20]:
wdbc_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 569 entries, 0 to 568
Data columns (total 31 columns):
 #   Column                Non-Null Count  Dtype  
---  ------                --------------  -----  
 0   Diagnosis             569 non-null    int64  
 1   radius-M              569 non-null    float64
 2   texture-M             569 non-null    float64
 3   perimeter-M           569 non-null    float64
 4   area-M                569 non-null    float64
 5   smoothness-M          569 non-null    float64
 6   compactness-M         569 non-null    float64
 7   concavity-M           569 non-null    float64
 8   concave_points-M      569 non-null    float64
 9   symmetry-M            569 non-null    float64
 10  fractal_dimension-M   569 non-null    float64
 11  radius-SE             569 non-null    float64
 12  texture-SE            569 non-null    float64
 13  perimeter-SE          569 non-null    float64
 14  area-SE               569 non-null    float64
 15  smoothness-SE         569 non-null    float64
 16  compactness-SE        569 non-null    float64
 17  concavity-SE          569 non-null    float64
 18  concave_points-SE     569 non-null    float64
 19  symmetry-SE           569 non-null    float64
 20  fractal_dimension-SE  569 non-null    float64
 21  radius-W              569 non-null    float64
 22  texture-W             569 non-null    float64
 23  perimeter-W           569 non-null    float64
 24  area-W                569 non-null    float64
 25  smoothness-W          569 non-null    float64
 26  compactness-W         569 non-null    float64
 27  concavity-W           569 non-null    float64
 28  concave_points-W      569 non-null    float64
 29  symmetry-W            569 non-null    float64
 30  fractal_dimension-W   569 non-null    float64
dtypes: float64(30), int64(1)
memory usage: 137.9 KB

Split the data into features and target label¶

In [21]:
result = wdbc_df['Diagnosis']
features = wdbc_df.drop('Diagnosis',axis = 1)
In [22]:
result.head()
Out[22]:
0    1
1    1
2    1
3    1
4    1
Name: Diagnosis, dtype: int64
In [23]:
features.head()
Out[23]:
radius-M texture-M perimeter-M area-M smoothness-M compactness-M concavity-M concave_points-M symmetry-M fractal_dimension-M ... radius-W texture-W perimeter-W area-W smoothness-W compactness-W concavity-W concave_points-W symmetry-W fractal_dimension-W
0 17.99 10.38 122.80 1001.0 0.11840 0.27760 0.3001 0.14710 0.2419 0.07871 ... 25.38 17.33 184.60 2019.0 0.1622 0.6656 0.7119 0.2654 0.4601 0.11890
1 20.57 17.77 132.90 1326.0 0.08474 0.07864 0.0869 0.07017 0.1812 0.05667 ... 24.99 23.41 158.80 1956.0 0.1238 0.1866 0.2416 0.1860 0.2750 0.08902
2 19.69 21.25 130.00 1203.0 0.10960 0.15990 0.1974 0.12790 0.2069 0.05999 ... 23.57 25.53 152.50 1709.0 0.1444 0.4245 0.4504 0.2430 0.3613 0.08758
3 11.42 20.38 77.58 386.1 0.14250 0.28390 0.2414 0.10520 0.2597 0.09744 ... 14.91 26.50 98.87 567.7 0.2098 0.8663 0.6869 0.2575 0.6638 0.17300
4 20.29 14.34 135.10 1297.0 0.10030 0.13280 0.1980 0.10430 0.1809 0.05883 ... 22.54 16.67 152.20 1575.0 0.1374 0.2050 0.4000 0.1625 0.2364 0.07678

5 rows × 30 columns

Machine learning model training and tuning¶

In [24]:
def train_predict(model, X_train, y_train, X_val, y_val):
    '''
    Train a model and evaluate it.

    inputs:
       - model: the learning algorithm to be trained and evaluated
       - X_train: training features
       - y_train: training labels
       - X_val: evaluation features
       - y_val: evaluation labels
    returns:
       - result: dict with accuracy, precision and F-beta (beta=0.5) scores
       - time_consum: dict with training and prediction times in seconds
    '''

    result = {}
    time_consum = {}

    # Fit the model and time the training
    start = time()
    model.fit(X_train, y_train)
    end = time()
    time_consum['train_time'] = end - start

    # Time the predictions on the evaluation set
    start = time()
    y_pred = model.predict(X_val)
    end = time()
    time_consum['pred_time'] = end - start

    # Compute evaluation metrics
    result['acc_test'] = accuracy_score(y_val, y_pred)
    result['pec_test'] = precision_score(y_val, y_pred)
    result['f_test'] = fbeta_score(y_val, y_pred, beta=0.5)

    # Return the results
    return result, time_consum

Display the models performance by plotting the score into a side-by-side bar chart¶

In [25]:
def evaluate(typeE, dictInfo):
    if typeE == 1:
        # dictInfo maps model name -> {'train_time': ..., 'pred_time': ...}
        item_names = list(dictInfo.keys())
        item_values_train = [data['train_time'] for data in dictInfo.values()]
        item_values_pred = [data['pred_time'] for data in dictInfo.values()]

        # Create a bar chart
        plt.figure(figsize=(20, 15))  # Optional: Adjust figure size
        bars_train = plt.bar(item_names, item_values_train, label='Training Time', width=0.5, align='center')
        bars_pred = plt.bar(item_names, item_values_pred, label='Prediction Time', width=0.5, align='edge')

        # Add values on top of the training time bars
        for bar, value in zip(bars_train, item_values_train):
            plt.text(bar.get_x() + bar.get_width() / 2, value, str(round(value, 4)), ha='center', va='bottom', rotation=45)

        # Add values on top of the prediction time bars
        for bar, value in zip(bars_pred, item_values_pred):
            plt.text(bar.get_x() + bar.get_width() / 2, value, str(round(value, 4)), ha='center', va='bottom', rotation=45)

        # Add labels and title
        plt.xlabel('Machine learning models', fontsize=24)
        plt.ylabel('Time consumed (seconds)', fontsize=24)
        plt.title('Training and Prediction Times for Different Models', fontsize=24)
        plt.legend(loc='lower center', bbox_to_anchor=(1, 1))
        # Show the chart
        plt.xticks(rotation=45)  # Optional: Rotate x-axis labels for better readability
        plt.tight_layout()  # Optional: Adjust layout for better spacing
        plt.show()
    else:
        # dictInfo maps model name -> {'acc_test': ..., 'pec_test': ..., 'f_test': ...}
        item_names = list(dictInfo.keys())
        item_acc_score = [data['acc_test'] for data in dictInfo.values()]
        item_precision = [data['pec_test'] for data in dictInfo.values()]
        item_f_score = [data['f_test'] for data in dictInfo.values()]

        plt.figure(figsize=(20, 15))
        # Define the positions for each group of bars
        x = np.arange(len(item_names))

        # Set the width of each group of bars
        bar_width = 0.25

        # Create a grouped bar chart
        bars_acc = plt.bar(x - bar_width, item_acc_score, label='Accuracy Score', width=bar_width, align='edge')
        bars_pec = plt.bar(x, item_precision, label='Precision Score', width=bar_width, align='edge')
        bars_fs = plt.bar(x + bar_width, item_f_score, label='F-score', width=bar_width, align='edge')

        # Add values on top of the bars
        for bars, values in [(bars_acc, item_acc_score), (bars_pec, item_precision), (bars_fs, item_f_score)]:
            for bar, value in zip(bars, values):
                plt.text(bar.get_x() + bar.get_width() / 2, value, str(round(value, 3)), ha='center', va='bottom', rotation=45)

        # Customize x-axis labels
        plt.xticks(x, item_names, rotation=45)  # Optional: Rotate labels for readability

        # Add labels and title
        plt.xlabel('Machine learning models', fontsize=24)
        plt.ylabel('Score', fontsize=24)
        plt.title('Accuracy, Precision and F-beta Score for Different Models', fontsize=24)
        plt.legend(loc='lower center', bbox_to_anchor=(1, 1))
        # Show the chart
        plt.show()

        

Splitting data¶

In [26]:
X_train, X_temp, y_train, y_temp = train_test_split(features, 
                                                    result, 
                                                    test_size=0.3, 
                                                    random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_temp, 
                                                y_temp, 
                                                test_size=0.5, 
                                                random_state=42)
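Since the classes are imbalanced (357 benign vs. 212 malignant), passing `stratify` to `train_test_split` keeps the class ratio consistent across splits. A sketch on synthetic labels with the same counts; in the notebook, the same keyword could be added to both calls above.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Sketch: stratified splitting preserves the 357/212 class ratio in each split.
y = np.array([0] * 357 + [1] * 212)
X = np.arange(len(y)).reshape(-1, 1)

X_tr, X_tmp, y_tr, y_tmp = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Both splits keep roughly the overall malignant fraction (~0.37)
print(round(y_tr.mean(), 3), round(y_tmp.mean(), 3))
```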

Standard scaling¶

In [27]:
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_val_scaled = scaler.transform(X_val)
X_test_scaled = scaler.transform(X_test)
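Note that the scaler is fit on the training split only, and the other splits are transformed with the training statistics, which avoids leaking test information into preprocessing. A minimal sketch of that behavior:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Sketch: fit on train only; transform other data with the training statistics.
train = np.array([[0.0], [2.0], [4.0]])  # training mean is 2.0
other = np.array([[2.0]])

scaler = StandardScaler().fit(train)
print(scaler.transform(other))  # the training mean maps to 0
```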
In [28]:
results = {}
time_consum = {}

# Create the five candidate models
model1 = LogisticRegression()
model2 = DecisionTreeClassifier()
model3 = RandomForestClassifier()
model4 = GaussianNB()
model5 = SVC(kernel='linear')


# Train each model on the training data and record its scores and timings

models = [model1, model2, model3, model4, model5]

for model in models:
    model_name = model.__class__.__name__
    results[model_name], time_consum[model_name] = train_predict(
        model, X_train_scaled, y_train, X_test_scaled, y_test)
In [29]:
evaluate(1, time_consum)
In [30]:
evaluate(2,results)

Fine-tuning the selected models.¶

In [31]:
# def fine_turn(model,parameter):
    
#     clf = model

#     # TODO: Create the parameters list you wish to tune, using a dictionary if needed.
#     # HINT: parameters = {'parameter_1': [value1, value2], 'parameter_2': [value1, value2]}
#     parameters = parameter

#     # TODO: Make an fbeta_score scoring object using make_scorer()
#     beta = 0.5
#     scorer = make_scorer(fbeta_score, beta=beta)

#     # TODO: Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
#     grid_obj = GridSearchCV(clf,parameters, scoring=scorer)

#     # TODO: Fit the grid search object to the training data and find the optimal parameters using fit()
#     grid_fit = grid_obj.fit(X_train, y_train)

#     # Get the estimator
#     best_clf = grid_fit.best_estimator_

#     # Make predictions using the unoptimized and the optimized model
#     predictions = (clf.fit(X_train, y_train)).predict(X_val)
#     best_predictions = best_clf.predict(X_val)

#     # Report the before-and-after scores
#     print("Score report for {} ".format(clf.__class__.__name__))
#     print("Unoptimized model\n------")
#     print("Accuracy score on Validation data: {:.4f}".format(accuracy_score(y_val, predictions)))
#     print("Precision score on Validation data: {:.4f}".format(precision_score(y_val, predictions)))
#     print("F-score on Validation data: {:.4f}".format(fbeta_score(y_val, predictions, beta = 0.5)))

#     print("\nOptimized Model\n------")
#     print("Final accuracy score on the Validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)))
#     print("Final precision score on Validation data: {:.4f}".format(precision_score(y_val,best_predictions)))
#     print("Final F-score on the Validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)))
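The commented-out function above sketches the intended grid search. A self-contained, runnable version of the same idea, using synthetic stand-in data rather than the notebook's WDBC splits (all variable names below are local to this example), might look like:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in data (the notebook uses the WDBC X_train / y_train)
X, y = make_classification(n_samples=300, n_features=10, random_state=42)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# F0.5 weights precision more heavily than recall
scorer = make_scorer(fbeta_score, beta=0.5)

grid = GridSearchCV(LogisticRegression(solver='liblinear'),
                    {'C': [0.01, 0.1, 1, 10]},
                    scoring=scorer, cv=5)
grid.fit(X_tr, y_tr)

best_clf = grid.best_estimator_
val_score = fbeta_score(y_val, best_clf.predict(X_val), beta=0.5)
print("Best C:", grid.best_params_['C'])
print("Validation F0.5: {:.4f}".format(val_score))
```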
In [32]:
# clf1 = LogisticRegression()
# parameters1 = {'penalty': ['l1', 'l2'],
#        'C': [0.001, 0.01, 0.1, 1, 10],
#        'solver': ['liblinear'],
#        'max_iter': [100, 200, 300]}

# clf2 = RandomForestClassifier()
# parameters2 = {
#    'n_estimators': [100, 200, 300],
#    'max_depth': [None, 10, 20, 30],
#    'max_features': ['sqrt', 'log2', None],
# }
In [33]:
# fine_turn(clf1,parameters1)
# print('\n ============================================== \n')
# fine_turn(clf2,parameters2)
In [34]:
# def final_eval(model):
    
#     y_pred_test = model.predict(X_test)
   
#     # Calculate test set metrics
#     test_accuracy = accuracy_score(y_test, y_pred_test)
#     test_precision = precision_score(y_test, y_pred_test)
#     test_f1 = fbeta_score(y_test, y_pred_test, beta=0.5)
    
#     # Print test set metrics

#     print("\nFinal Evaluation with {} on Test Set:".format(model.__class__.__name__))
#     print(f"Test Accuracy: {test_accuracy:.4f}")
#     print(f"Test Precision: {test_precision:.4f}")
#     print(f"Test F1-Score: {test_f1:.4f}")
In [35]:
# for i in [clf1,clf2]:
#     final_eval(i)

Extracting Feature Importance¶

Choose a scikit-learn supervised learning algorithm that has a feature_importances_ attribute available. This attribute ranks the importance of each feature when the chosen algorithm makes predictions.

  • Extract the feature importances using '.feature_importances_'.
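As a minimal sketch on synthetic stand-in data (not the WDBC features used in the notebook): tree ensembles such as `RandomForestClassifier` expose one normalized weight per input column, which can be sorted to rank features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the WDBC feature matrix
X, y = make_classification(n_samples=200, n_features=8, n_informative=3,
                           random_state=42)
rf = RandomForestClassifier(random_state=42).fit(X, y)

importances = rf.feature_importances_     # one weight per feature
top5 = np.argsort(importances)[::-1][:5]  # indices, most important first

print(len(importances))                   # equals the number of features
print("Top-5 feature indices:", top5)
```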
In [36]:
def feature_plot(importances, X_train):
    
    # Display the five most important features
    indices = np.argsort(importances)[::-1]
    columns = X_train.columns.values[indices[:5]]
    values = importances[indices][:5]

    # Create the plot
    fig = plt.figure(figsize = (16,10))
    plt.title("Normalized Weights for First Five Most Predictive Features", fontsize = 20)
    plt.bar(np.arange(5), values, width = 0.4, align="center", color = '#00A000', \
          label = "Feature Weight")
    plt.bar(np.arange(5) - 0.3, np.cumsum(values), width = 0.2, align = "center", color = '#00A0A0', \
          label = "Cumulative Feature Weight")
    plt.xticks(np.arange(5), columns)
    plt.xlim((-0.5, 4.5))
    plt.ylim(0, 1)
    plt.ylabel("Weight", fontsize = 20)
    plt.xlabel("Feature", fontsize = 20)
    
    plt.legend(loc = 'upper center')
    plt.tight_layout()
    plt.show() 
In [37]:
# Feature importances from the trained RandomForestClassifier (model3)

# TODO: Extract the feature importances using .feature_importances_ 
importances2 = model3.feature_importances_

# Plot
feature_plot(importances2, X_train)
In [38]:
indices = np.argsort(importances2)[::-1]
columns = X_train.columns.values[indices[:5]]
values = importances2[indices][:5]
for i in range(len(columns)):
    print("Weight for {} is: {:.4f}".format(columns[i], values[i]))
Weight for area-W is: 0.1225
Weight for perimeter-W is: 0.1157
Weight for concave_points-W is: 0.1096
Weight for concave_points-M is: 0.1021
Weight for radius-W is: 0.0808
In [39]:
# Sort the feature importances in descending order
sorted_indices = np.argsort(importances2)[::-1]
sorted_importances = importances2[sorted_indices]

# Calculate the cumulative feature weight
cumulative_importance = np.cumsum(sorted_importances)

# Print the top 5 cumulative feature weights
print("Top 5 Cumulative Feature Weights:")
for i in range(5):
    feature_index = sorted_indices[i]
    feature_weight = cumulative_importance[i]
    print("Feature {}: {:.4f}".format(feature_index, feature_weight))
Top 5 Cumulative Feature Weights:
Feature 23: 0.1225
Feature 22: 0.2382
Feature 27: 0.3478
Feature 7: 0.4500
Feature 20: 0.5308
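The sort-then-cumsum pattern above also answers a follow-up question: how many top features are needed to cover a target fraction of total importance. A small sketch with a hypothetical importance vector:

```python
import numpy as np

# Hypothetical importance vector (not the notebook's importances2)
imp = np.array([0.05, 0.40, 0.15, 0.30, 0.10])

order = np.argsort(imp)[::-1]    # indices from most to least important
cum = np.cumsum(imp[order])      # running total of explained weight

print(order)                     # [1 3 2 4 0]
print(cum)

# Smallest k such that the top-k features cover >= 80% of total importance
k = int(np.searchsorted(cum, 0.8)) + 1
print(k)                         # 3
```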
In [40]:
plt.figure(figsize=(32, 24))  # adjust the figure size if needed

sns.set(font_scale=1.5)  # adjust the font scale if needed
sns.heatmap(correlation_matrix, annot=True, cmap="coolwarm", linewidths=.5)

plt.title('Correlation Heatmap')
plt.show()
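The heatmap cell assumes `correlation_matrix` was computed earlier in the notebook. One standard way to build it is `DataFrame.corr()`, shown here on a small hypothetical frame (column names are illustrative, not the WDBC columns):

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for the WDBC feature DataFrame
rng = np.random.default_rng(0)
a = rng.normal(size=100)
df = pd.DataFrame({'radius': a,
                   'area': a ** 2 + rng.normal(scale=0.1, size=100),
                   'noise': rng.normal(size=100)})

correlation_matrix = df.corr()  # pairwise Pearson correlations

print(correlation_matrix.shape)                        # (3, 3)
print(np.allclose(np.diag(correlation_matrix), 1.0))   # diagonal is 1
```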
In [41]:
feature_plot(importances2, X_train)

Conclusion:¶

Given the strong performance of the trained models, together with the feature-importance rankings and the correlation matrix, we reject the null hypothesis: certain features in the breast cancer dataset are clearly more informative than others for predicting tumor malignancy.¶
